
Unpacking AI: "an exponential disruption" with Kate Crawford: podcast and transcript

Chris Hayes speaks with AI expert Kate Crawford about the social and political implications of the rise of artificial intelligence.

You might be feeling that artificial intelligence is starting to seem a bit like magic. Our guest this week points out that AI, once the subject of science fiction, has seen the biggest rise of any consumer technology in history and has outpaced the uptake of TikTok, Instagram and Facebook. As we see AI becoming more of an everyday tool, students are even using chatbots like ChatGPT to write papers. While automating certain tasks can help with productivity, we’re starting to see more examples of the dark side of the technology. How close are we to genuine artificial general intelligence? Kate Crawford is an AI expert, research professor at USC Annenberg, honorary professor at the University of Sydney and senior principal researcher at Microsoft Research Lab in New York City. She’s also the author of “Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence.” Crawford joins WITHpod to discuss the social and political implications of AI, exploited labor behind its growth, why she says it’s “neither artificial nor intelligent,” climate change concerns, the need for regulation and more.

Note: This is a rough transcript — please excuse any typos.

Kate Crawford: We could turn to OpenAI's own prediction here, which is that they say 80 percent of jobs are going to be automated in some way by these systems. That is a staggering prediction.

Goldman Sachs just released a report this month saying 300 million jobs are looking at, you know, very serious forms of automation impacting what they do from day-to-day. So, I mean, it's staggering when you start to look at these numbers, right?

So, the thing that I think is interesting is to think about this historically, right? We could think about the Industrial Revolution. It takes a while to build factory machinery and train people on how things work.

We could think about the transformations that happened in the sort of early days of the personal computer. Again, a slow and gradual rollout as people began to incorporate this technology. The opposite is happening here.

Chris Hayes: Hello and welcome to "Why Is This Happening?" with me, your host, Chris Hayes. There's a famous Arthur C. Clarke quote that I think about all the time. He was a science fiction writer and futurist and he wrote a book called "Profiles of the Future: An Inquiry into the Limits of the Possible" and this quote, which you've probably caught at one point or another, is that, "Any sufficiently advanced technology is indistinguishable from magic."

And there's something profound about that. I remember the first time that, like, I saw Steve Jobs do the iPhone presentation. And then, the first one I held in my hand, it really did feel like magic. It felt like a thing that formerly wasn't possible, that I knew what the sort of laws of physics and technology were and this thing came along and it seemed to break them, so it felt like magic.

I remember feeling that way the first time that I really started to get on the graphical version of the internet. Even before that when I got on the first version of the internet. Like, oh, I have a question about a thing. You know, this baseball player Rod Carew, what did he hit in his rookie season? Right away, right? Magic. Magically, it appears in front of me.

And I think a lot of people have been having the feeling about AI recently. There's a bunch of new, sort of public-facing, machine learning, large language model pieces of software. One is ChatGPT, which I've been messing around with.

There's others for images. One called Midjourney and a whole bunch of others. And you’ve probably seen the coverage of this because it seems like in the last two months it's just gone from, you know, nowhere and people talk about AI and the algorithm machine learning tool, like, holy smokes.

And I got to say, like, we're going to get into the ins and outs of this today. But at the sort of does it feel like magic level, like, it definitely feels like magic to me.

I went to ChatGPT. I was messing around with it. I told it to write a standup comedy routine in the first person of Ulysses S. Grant about the Siege of Vicksburg using, like, specific details from the battle and it came back with, like, you know, I had to hide my soldiers the way I hide the whiskey from my wife, which is, like, you know, he notoriously had a drinking problem, although he tended not to drink around his wife. So, it was, like, slightly off that way.

But it was like a perfectly good standup routine about the Siege of Vicksburg in the first person of Ulysses S. Grant, and it was done in five seconds. Obviously, we're going to get into all sorts of, you know, I don't think it's going to be like taking over for us, but the reason it felt like magic to me is I know enough about computers and the way they work that I can think through like when my iPhone's doing something, when I'm swiping, I can model what's happening.

Like, there's a bunch of sensors in the actual phone. Those sensors have a set of programming instructions to receive the information of a swipe and then compare it against a set of actions and figure out which one is closest to and then do whatever the command is.

And, you know, I've programmed before, and I can reason out what it's doing. I can reason out what, like, my car is doing. I understand basically how an internal combustion engine works and, you know, the pistons. And I just have no idea what the hell is happening inside this thing that when I told it to do this, it came back with something that seemed like the product of human intelligence. I know it's not. We're going to get into all of it, but it's like it does seem to me like a real step change.

You know, a lot of people feel that way. Now, it so happens that this is something that I studied as an undergraduate and thought a lot about. And there's a long literature about artificial intelligence and human intelligence and we're going to get into all that today.

But because this is so front-of-mind, because this is such an area of interest for me, I'm really delighted to have on today's program Kate Crawford. This is Kate Crawford's life’s work. She's an artificial intelligence expert. She studies the social and political implications of AI.

She's a Research Professor at USC Annenberg, Honorary Professor at University of Sydney, Senior Principal Researcher at Microsoft Research Lab in New York City.

She's the author of "Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence." A lot of the things that I think have exploded onto public consciousness in the last few months have been the subject of work that she's been thinking about and doing for a very long time.

So, Kate, it's great to have you in the program.

Kate Crawford: Thanks for having me, Chris.

Chris Hayes: Does it feel like magic to you?

Kate Crawford: I'll be honest. There is definitely the patina of magic. There's that feeling of how is this happening. And to some degree, you know, I've been taken aback at the speed by which we've gotten here. I think anybody who's been working in this field for a long time will tell you the same thing.

Chris Hayes: Oh, really? This feels like a step change to you --

Kate Crawford: Oh, yeah.

Chris Hayes: -- like we're in a new --

Kate Crawford: Yeah. This feels like an inflection point, I would say, even bigger than a step function change. We're looking at --

Chris Hayes: Right.

Kate Crawford: -- a shift that I think is pretty profound and, you know, a lot of people use the iPhone example or the internet example. I like to go even further back. I like to think about the invention of artificial perspective, so we can go back into the 1400s where you had Alberti outline a completely different way of visualizing space, which completely transformed art and architecture and how we understood the world that we lived in.

You know, it's been described as a technology that shifted the mental and material worlds of what it is to be alive. And this is one of those moments where it feels like a perspectival shift that can feel magic. But I can assure you, it is not magic and that’s --

Chris Hayes: No, I know --

Kate Crawford: -- where it gets interesting.

Chris Hayes: OK. I know it's not. I'm just being clear. Obviously, I know it's not magic. And also, I actually think the Arthur C. Clarke quote is interesting because there's two different meanings, right?

So, it feels like magic in the sense of, like, things that are genuine magic, right, that in a fantastical universe, they're miracles, right? Or it feels like magic in that, like, when you're around an incredible magician, you know that the laws of physics haven't been suspended but it sure as heck feels like it, right?

Kate Crawford: Oh, yeah.

Chris Hayes: And that's how this feels to me. Like, I understand that this is just, you know, a probabilistic large language learning model that then we'll get into how this is working. So, I get that.

But it sure as heck, on the outcome line, you know, feels like something new. The perspectival shift is a really interesting idea. Why does that analogy appeal to you?

Kate Crawford: Well, let's think about these moments of seeming magic, right? So, there is just decades of examples of this experience. And in fact, we could go all the way back to the man who invented the first chatbot. This is Joseph Weizenbaum. And in the 1960s when he's at MIT, he creates a system called ELIZA. And if you're a person of a certain age, you may remember when ELIZA came out. It's really simple, kind of almost set of scripts that will ask you questions and elicit responses and essentially have a conversation with you.

So, writing in (ph) the 1970s, Weizenbaum was shocked that people were so easily taken in by this system. In fact, he uses a fantastic phrase around this idea that there is this powerful delusional thinking that is induced in otherwise normal people the minute you put them in front of a chatbot.

We assume that this is a form of intelligence. We assume that the system knows more than it does. And, you know, he captured that in this fantastic book called "Computer Power and Human Reason" back in 1976, and I think that phenomenon hasn't changed: when we open up ChatGPT, you really can get that sense of, OK, this is a system that really feels like I'm talking to, if not a person, at least a highly evolved form of computational intelligence.

And I think what's interesting about this perspectival shift is that, honestly, this is a set of technologies that have been pretty well known and understood for some time. The moment of change was the minute that OpenAI put it into a chat box and said, hey, you can have a conversation with a large language model.

That's the moment people started to say this could change every workplace, particularly white-collar workplaces. This could change the whole way that we get information. This could change the way we understand the world because this system is giving you confident answers that can feel extremely plausible even when they make mistakes, which they--

Chris Hayes: Yes.

Kate Crawford: -- frequently do.

Chris Hayes: So, I mean, part of that, too, is like, you know, humans see faces in all kinds of places where there aren’t faces, right? We project inner lives onto our pets. You know, we have this drive to mentally model other consciousnesses, partly because of the intensely inescapable social means by which we evolved.

So, part of it is, in the same way that magicians take advantage of certain parts of our perceptual apparatus, right, like we're easily distracted by, like, loud motions, right? It's doing that here with our desire to impute consciousness, in the same way that, like, we have a whole story about what's going on in a dog's mind when it gets out into the park.

Kate Crawford: Exactly.

Chris Hayes: But, like, I'm not sure it's correct.

Kate Crawford: That is it. And I actually think the magician's trick analogy is the right one here because it operates on two levels. First, we're contributing half of the magic by bringing those, you know, anthropomorphic assumptions into the room and by playing along.

We are literally training the AI model with our responses. So, when it says something and we say, oh, that's great. Thanks. Could I have some more? That’s a signal to the system this was the correct answer.

If you say, oh, that doesn't seem to match up, then it takes that as a negative --

Chris Hayes: Right.

Kate Crawford: -- signal. So, we are literally training these systems with our own intelligence. But there's another way we could think about this magician's trick because while this is happening and while our focus is on, oh, exciting LLMs, there's a whole other set of political and social questions that I think we need to be asking that often get deemphasized.

Chris Hayes: There's a few things here. There's the tech, there's the kind of philosophy, and then there's the, like, political and social implication.

So, just start on the tech. Let's go back to the chatbot you're talking about before, ELIZA. So, there's a bunch of things happening here in a chatbot like ChatGPT that are worth breaking down.

The first is just understanding natural language and, you know, I did computer science as an undergraduate and philosophy and philosophy of mind and some linguistics when I was an undergraduate 25 years ago. And at that time, like, natural language processing was a huge unsolved problem.

You know, we all watched "Star Trek". Computer, give me this. And it's like getting a computer to understand a simple sentence is actually, like, wildly complex as a computational problem. We all take it for granted, but it seems like even before you get into what it's giving you back, I mean, now, it's embedded in our lives, Siri, all this stuff.

Like how did we crack that? Is there a layperson's way to explain how we cracked natural language processing?

Kate Crawford: I love the story of the history of how we got here because it gives you a real sense of how that problem has been, if not cracked, certainly seriously advanced. So, we could go back to the sort of prehistory of AI. So, I think sort of 1950s, 1960s.

The idea of artificial intelligence then was something called knowledge-based AI or an expert systems approach. The idea of that was that to get a computer to understand language, you had to teach it to understand linguistic principles, high-level concepts to effectively understand English like the way you might teach a child to understand English by thinking about the principles and thinking about, you know, here's why we use this sort of phrasing, et cetera.

Then something happens in around the 1970s and early 1980s, a new lab is created at IBM, the continuous-speech recognition lab, the CSR lab. And this lab is fascinating because a lot of key figures in AI are there, including Robert Mercer who would later become famous as the, shall we say, very backroom-operator billionaire who funded people like Bannon and the Trump campaign.

Chris Hayes: Yup.

Kate Crawford: Yes, and certainly, the Brexit campaign.

Chris Hayes: Yup.

Kate Crawford: So, he was one of the members of this lab that was headed by Professor Jelinek, and they had this idea. They said instead of teaching computers to understand, let's just teach them to do pattern recognition at scale.

Essentially, we could think about this as the statistical turn, the moment where it was less about principles and more about patterns. So, how do you do it? To teach that kind of probabilistic pattern recognition, you just need data. You need lots and lots and lots of linguistic data, just examples.

And back then, even in the, you know, 1980s, it was hard to get a corpus of data big enough to train a model. They tried everything. They tried patents. They tried, you know, IBM technical manuals, which, funnily enough, didn't sound like human speech. They tried children's books.

And they didn't get a corpus that was big enough until IBM was actually taken to court. This was, like, a big antitrust case that went on for years. They had, like, a thousand witnesses called. And that case is what produced the corpus that they used to train their model. Like, honestly, you couldn't make this stuff up. It's wild (ph).

Chris Hayes: Is that right?

Kate Crawford: Oh, absolutely. So, they have a breakthrough which is that it is all about scale. And so interestingly --

Chris Hayes: Right.

Kate Crawford: -- Mercer has this line, you know, which is fantastic. There's a historian of science, Tsao-Cheng Lee (ph) who's written about this moment. But, you know, Mercer says, it was one of the rare moments of government being useful despite itself. That was how --

Chris Hayes: Boo.

Kate Crawford: -- he justified this case, right?

So, we see this change towards basically it's all about data. So, then we have the years of the internet. Think about, you know, the early 2000s. Everyone's doing blogs, social media appears, and this is just grist to the mill. You can scrape and scrape and scrape and create larger and larger training data sets.

So, that's basically what they call these foundational data sets, which are used to find these patterns. So, effectively, LLMs are advanced pattern recognizers that do not understand language, but they are looking for, essentially, patterns and relationships in the text that they've been trained on, and they use this to essentially predict the next word in a sentence. So, that's what they're designed to do.

Chris Hayes: This statistical turn is such an important conceptual point. I just want to stay on it because I think this, like, really helped. And this turn happened before I was sort of interested in natural language processing. But when we were talking about natural language processing, we're still talking in this old model, right?

Well, you teach kids these rules, right? Or if you learn a second language, like, you learn verb conjugation, right? And you're running them through these rules, like, OK, that's a first person. There's this category called first person. There's a category called verb, then a conjugation. There's a category of conjugation. One plus one plus one equals three. That gives me, you know, yo voy (ph). OK.

So, that’s this sort of principle, rule-based way of sort of understanding language and natural language processing. So, the statistical turn says throw all that out. Let's just say if someone says “thanks” what's likely to be the next word?

And you see this in the Gmail autocomplete.

Kate Crawford: Yup.

Chris Hayes: When you type “thanks”, it will light up “so much”. It's just that “thanks so much” goes together a lot. So, when you put in “thanks”, there's, like, a pretty good chance it's going to be “so much”.

And that general principle of if you run enough data and you get enough probabilistic connections between this and that word at scale, is how you get Ulysses S. Grant doing a joke about Vicksburg and hiding his troops the way he hides whiskey from his wife.

Kate Crawford: Exactly. And you could think about all of the words in that joke as being in a kind of big vector space or word cloud where you'd have Ulysses S. Grant, you'd have whiskey, you'd have soldiers, and you can kind of think about the ways in which they would be related.

And the funny thing is trying to write jokes with GPT, some of the time, it's really good and some of the time, it's just not funny at all because it's not --

Chris Hayes: Right. Sure.

Kate Crawford: -- coming from a basis of understanding humor or language.

Chris Hayes: No.

Kate Crawford: It's essentially doing this very large word association game.
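
To make the "thanks so much" idea concrete, here is a minimal sketch in Python of next-word prediction by simple counting. This is not how GPT is actually built; modern models use neural networks trained on enormous corpora rather than raw counts, and the toy corpus and function names below are invented for illustration. But it captures the pattern-over-principles shift the two are describing.

```python
# A minimal sketch (not how GPT is actually built) of the "statistical turn"
# idea discussed above: count which word tends to follow which, then predict
# the most likely next word. Real LLMs use neural networks over vast corpora;
# this toy bigram counter only illustrates the pattern-over-principles idea.
from collections import Counter, defaultdict

corpus = [
    "thanks so much for the help",
    "thanks so much for listening",
    "thanks a lot for the feedback",
    "thanks so much",
]

# Count how often each word follows each other word (bigram counts).
following = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current_word, next_word in zip(words, words[1:]):
        following[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequent continuation seen in the toy corpus."""
    if word not in following:
        return None
    return following[word].most_common(1)[0][0]

print(predict_next("thanks"))  # -> "so", because "thanks so" dominates the counts
```

Scale that same idea up to billions of documents and far richer representations of context, and you get the flavor of what a large language model is doing when it continues a prompt.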

Chris Hayes: Right. OK. So, I understand this principle. Like I get it. It's a probabilistic model that is trained on a ton of data and because it's trained on so much data and because it's using a crazy amount of processing power.

Kate Crawford: Oh, yes.

Chris Hayes: Like a genuinely crazy and, like, expensive and carbon intensive. So like, it's like running a car like a huge Mack truck, right?

Kate Crawford: Oh, yeah.

Chris Hayes: It's working its butt off to give me this, my dumb little Vicksburg joke. So, like, I get that intuitively, but maybe, like, if we could just go to the philosophy place, it’s like, OK, it doesn't understand. But then we're at this question of, like, all right, well what does understanding mean, right?

Kate Crawford: Right.

Chris Hayes: And this is where we start to get into this sort of philosophical AI question. And there's a long line here. There's Alan Turing's Turing test, which means we should explain to folks who don't know that. There's John Searle's Chinese box example, which we should also probably take a second.

But basically, for a long time, this question of, like, what does understanding mean? And if you encountered an intelligence that acted as if it were intelligent, at what point would you get to say it's intelligent without peering into what it's doing on the inside to produce the thing that makes it seem intelligent.

And the Turing test, is Alan Turing, the brilliant British mathematician, basically says, if you can interact with a chatbot that fools you, that's intelligence. And it just feels like, OK, well, ChatGPT, I think, is passing it. It feels like it passes the Turing test at least in some circumstances, yes?

Kate Crawford: So, I mean, this is really interesting because in Turing's work, this was less a test. He doesn't actually propose it as a serious test so much as a philosophical question.

Chris Hayes: Right. Yes.

Kate Crawford: And interestingly, he proposes it as you would have, you know, a human and a computational system and a judge. And so, it's very much a relational process between a person trying to decide if this response came from a human being or a computer.

And what's interesting about the way that we think about human intelligence is that it isn't this sort of brain in a box. It isn't a sort of, you know, Descartes vision of, you know, the body is over here, and the brain is over here. That sort of Cartesian dualism, which I think infects a lot of computer science discourse, has been challenged by, you know, centuries of philosophy, and particularly since the 20th century, because it skips all of that physical embodiment, the relationality, the sociality of intelligence.

It's how we form intelligence collectively that is so specific to our species, rather than just being, you know, ants that are kind of just doing their own thing to survive. And ants, again, are another great example of a collectively intelligent species.

So, there's been lots of pushback on how to think about intelligence here and in particular, this idea that just performing a set of tasks equals that's human intelligence, I think, is really narrow and really problematic. And it's something that, you know, I think has also been a problem of the current AI debate is that it really downgrades not just human intelligence and all the things that we do with it, which go far beyond, you know, Q&A, but it also fails to look behind the curtain of how these systems work.

So, you mentioned two really important things. One is all of the training data. Second is computational scale. And we are talking about gargantuan amounts of compute. I mean, you could think about this as one of the biggest engineering projects in human history, and I'm not exaggerating.

The amount of compute to make something like GPT work is astronomical. Whatever you're thinking of, triple it. So, that's number two.

But number three is humans. We've talked about this sort of statistical reasoning. There's also a level inside GPT which is called reinforcement learning from human feedback, and RLHF is this kind of new secret sauce that has been added in to make these systems better, so they don't produce a lot of stuff that really doesn't make sense.

But if you have this human layer of people really checking the answers, giving the system better feedback, saying, hey, this works and this doesn't, that is one of the really important things that makes this system work. It's actual humans.
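
For readers who want a feel for what RLHF means mechanically, here is a deliberately simplified, hypothetical sketch assuming nothing beyond the conversation above: human raters prefer some outputs over others, and those preferences become a reward signal that makes better-rated answers more likely. Real RLHF pipelines train a separate reward model and fine-tune the language model with reinforcement learning; the candidate answers, preference records and scoring below are invented purely to show the shape of the loop.

```python
# A toy illustration (not OpenAI's actual pipeline) of the reinforcement
# learning from human feedback (RLHF) idea described above: human raters
# compare candidate answers, and their preferences become a reward signal
# that nudges the system toward answers people rate as better.
import random

# Hypothetical candidate answers the model might produce for one prompt.
candidates = {
    "answer_a": "The Siege of Vicksburg ended on July 4, 1863.",
    "answer_b": "Vicksburg is a kind of cheese, probably.",
}

# Human feedback: each record says which answer the rater preferred.
human_preferences = ["answer_a", "answer_a", "answer_b", "answer_a"]

# Turn preferences into a simple reward score per candidate.
reward = {name: 0.0 for name in candidates}
for preferred in human_preferences:
    reward[preferred] += 1.0

# "Policy": sample answers in proportion to their learned reward,
# so better-rated answers become more likely over time.
def choose_answer():
    names = list(candidates)
    weights = [reward[n] + 1e-6 for n in names]  # avoid zero weights
    return candidates[random.choices(names, weights=weights)[0]]

print(choose_answer())
```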

So, when you pull back the AI curtain, guess what? You find more people and they tend to be people based in the global south being paid two dollars or less per hour. So, there's a really important story --

Chris Hayes: Wow.

Kate Crawford: -- about the labor exploitation that goes behind the making of this appearance of intelligence.

Chris Hayes: Wait, that's happening now?

Kate Crawford: Oh, yeah. There's an amazing investigation that was published by "TIME" magazine around OpenAI using crowd workers specifically based in Kenya.

Chris Hayes: And they're just like doing feedback training?

Kate Crawford: Mm-hmm.

Chris Hayes: Oh, wow, that’s wild. Oh, my goodness.

Kate Crawford: Yeah.

Chris Hayes: Yeah. There's a sort of factory-line, outsourced assembly worker at the core of the next frontier of automated intelligence.

Kate Crawford: This is kind of one of the big focuses of my book, "Atlas of AI". The reason I have, like, a whole chapter focused on labor, the human labor that is behind the curtain, is because the story isn't told enough. We get, I think, so impressed by the kind of magic of the system that we don't look at what does it take to make these things work.

It takes data at scale. It takes an enormous amount of natural resources. And it takes a lot of labor all along the supply chain, but in particular, this level of crowd work gets missed out of the story so much, and it's a really important part of how it works.

Chris Hayes: So, all of these ingredients are getting put in, right? But then there's this question of, like, exponential development, and I think your point here is really an important one. Like, it's all just built on people.

The reason it can predict that “thanks so much” goes together is, like, actual humans keep saying thanks so much. It's not generating that. It's just recognizing that pattern and all this stuff.

You see that, to me, the place where this is even more apparent is with the illustration AIs because, there, it really feels like you're just ripping off people that actually made illustrations. You fed their work in without any regard for intellectual property, and now that's what you're going to train this model on and replace them.

The language stuff feels less that way because everybody talks basically or communicates in some way. You know, artistry, illustration is like a specific skill set and some people are really good at it. And they put their stuff in the internet, which has now been fed into this thing that is just essentially, like, appropriating that data to replace the people that used to make or still make this stuff. That feels pretty messed up to me.

Kate Crawford: Well, this is one of the biggest stories. Part of the reason I think about the AI industry as sort of the extractive industry of the 21st century is that it's premised on extracting all of those examples, text and images and video, in order to train these systems.

So, again, it really is about, you know, a type of capture of the commons. These things that we previously understood that we were sharing with each other or, you know, putting on MySpace or Facebook as we were sharing --

Chris Hayes: Right.

Kate Crawford: -- with our friends, remember that?

Chris Hayes: Right.

Kate Crawford: But actually, you were training these very large models in which you have no property rights. I mean, it's interesting that if you look at the current copyright debates around LLMs, there's a whole series of lawsuits happening right now.

We've got photographers, programmers and illustrators that are all suing various companies in this ecosystem, including, you know, OpenAI and Stability AI. And it's going to be interesting to see how these pan out. The courts haven’t, you know, come out with the sort of final decisions yet.

But reading the tea leaves, it really looks like because the scraping of the internet has, in previous cases, been defined as fair use, as in if you're taking it and turning it into something else, this is OK.

Chris Hayes: Right.

Kate Crawford: You're not just completely spitting out the same thing that you copied.

Chris Hayes: No.

Kate Crawford: This constitutes a transformative process and, therefore, really you're not actually infringing these people's copyright. But I think there's a deeper moral question here, which is, you know, how are we going to ensure that these models don't create a massive deprofessionalization across the creative sector? I'm talking about everything from illustrators and photographers to technical writers.

There are all of these specific employment categories, according to the Bureau of Labor Statistics, all the, kind of, creative professions, that are vulnerable to deprofessionalization by these tools. So, how do we think about ways of continuing some form of credit and compensation if you don't get to have consent?

If you think about the three Cs, right, like consent, credit and compensation, we've lost on consent but maybe credit and compensation is possible here.

Chris Hayes: All right. I want to hold that thought because I want to get to this sort of implications. But I just want to stay on the philosophical point a little further because it's like, OK, at a certain point, I guess my point is, yes, it's based on human input intelligence. Yes, it's computing at scale.

But the more sophisticated the emergent behavior gets, I mean, I don't know, at certain point, you could keep reiterating that which, again, fine. But the profound philosophical question is like, well, my neurons are doing something like that. They're engaged in an incredibly complex set of calculations and computations at the most minute level.

I'm a secular person, so I don't think it's like this, you know, voice of the divine inside me that's my soul. I think that there's an emergent thing that happens where all those cells are doing a bunch of stuff and out of that, emerges a thing called consciousness, which is like my little self, sitting in my little head, pulling the levers and thinking thoughts, you know.

At a certain point, you know, whatever the underlying substrate is, right, in my sense, like, my neurons and the chemistry and carbon-based life form or like, you know, computers of a sufficient level of computational power, I guess the question, like, when does it start to cross into something that is more than a magic trick basically is my question to you as someone who spent your whole life thinking about this because I agree with you, it's not there yet.

It is a magic trick. It's a really good magic trick. It's a very useful magic trick, in fact, which we're going to get to.

Kate Crawford: Right.

Chris Hayes: But it's still a magic trick. But I'm just not convinced it's going to stay a magic trick.

Kate Crawford: You have put your finger on one of the biggest divides in the AI field right now, which is I think about it this way. On one side, we have what you might call AI spiritualists who genuinely believe that these techniques of building large language models are bringing about emergent behavior that will produce something called artificial general intelligence or in other language, the super intelligence or the singularity, right?

So, this is the idea that we are effectively going to get to a place where we are training AI systems that can not only do everything that humans do better but could actually start to control the world. Now, you might have seen a highly controversial op-ed that came out a week ago that was arguing that all AI training should be shut down completely and in fact, we should be --

Chris Hayes: Yes.

Kate Crawford: -- bombing data centers from the air and being prepared to use nuclear capacity to take out countries who continue to do large-scale AI training runs. I think that's pretty extreme and I have a lot of problems --

Chris Hayes: Yeah.

Kate Crawford: -- with that framing. But you could really put that at one extreme of this idea of like the AI spiritualists who say, we are creating out-of-control digital minds.

At the other end, we can think about what I think of as AI materialists, and I'm definitely in that category, people who say, let's actually see how AI is really built. Let's look at how it really works.

Let's open the hood. Let's look at the statistical reasoning properties. Let's look at the data. Let's look at the large amounts of compute. Let's actually look at the infrastructure that it takes to make this work at scale. Let's look at the crowd workers and the people in the background and say, what are the planetary costs of systems like this, what does that actually look like, and are these emergent properties really just a property of, you know, an algorithm or are they actually emergent properties of what happens when you get many thousands of people working on the systems night and day and injecting so much of their own human intelligence --

Chris Hayes: Right.

Kate Crawford: -- to create this essential experience of increasing intelligence? Now, there's lots of positions in the middle, too, and personally, I am convinced that we are seeing forms of emergent behavior, but that doesn't mean intelligent behavior, that doesn't mean this is autonomous. It is deeply embedded in a whole lot of tech industries that are doing a lot of work right now --

Chris Hayes: Yeah.

Kate Crawford: -- to create those emergent properties.

Chris Hayes: It's a fascinating philosophical question. I mean, it's the sort of the philosophical question in some ways, right, what's the origin of human consciousness, why are we separate and different than the other species that inhabited the Earth as we think we are, why do we have this thing that, you know, that rattles around inside our head which is called the self and, you know, how is that distinct and is that replicable through, again, some substrate that's not that specific human brain, right?

But putting all those aside, so now let's go back to the thing we have in front of us because that's an interesting philosophical question. It's an interesting question for tail risk, right, the idea that the machines take over and they eradicate us, or they enslave us in some, you know, Matrix-like situation.

But let's just put that aside. Here's the thought I had the other day. I have an 11-year-old, a 9-year-old and a 5-year-old. So, I do a fair amount of kid homework, you know, checking kid homework. And I had this thought about AI the other day which was this. If you've got a sixth grader, she's writing five-paragraph essays, right?

She got her English and history and language arts homework and then she got her math homework. So, math homework might be dividing two fractions by each other, long division, and there's a whole bunch of computational stuff that just you never do as an adult because you just get a calculator to do it.

Now, we understand that. My sixth grader understands that calculators can do this. But you learn how to do it because it's part of that building up the structure of mathematical knowledge. You exercise those brain muscles. You're going to need people to build on those skills to do other stuff further on.

But we understand when you're doing math, computational math with a sixth grader or a third grader that much of it is just already replaced by a machine, you're not going do it in adult life. That's not true with the five-paragraph essay until literally the last six weeks.

Kate Crawford: Right.

Chris Hayes: I mean, I honestly, like, had this feeling sitting there, it's like, oh, a sixth grader's five-paragraph essay is essentially now the equivalent of the long division. Like you really can get a machine to turn out a perfectly good five-paragraph essay at sixth-grade level.

Kate Crawford: Exactly.

Chris Hayes: Right now. What does that mean?

Kate Crawford: Right.

Chris Hayes: What does that mean for us? What does that mean for education? What does that mean for knowledge work and reading and writing work? Like what does that mean?

Kate Crawford: I have to say, as somebody who is also the homework counselor of a fifth grader, I have also had this exact experience and set of questions. And I think the first thing that comes to mind is that we are looking at the first significant challenge to a 500-year model of education, at such a depth that I think we are looking at a transformation of how we even train or think about assessing kids, and not just K-12.

I'm talking about university education is going to have to significantly reframe what are we doing here because the old model of getting a professor who stands up at the front and says a bunch of things and you take some notes and you're able to kind of really get the ideas such that you can regurgitate them in an essay, that form of assessment is done and that's a really big thing that has happened literally this year. And that's just for starters. I honestly think this is the first year, you know, really 2023, where the hot new programming language for computer science is English.

Chris Hayes: Yeah.

Kate Crawford: You can just type in what you want into GPT and say, I would like a program that does this, and you get it out. And for somebody like me who, like, you know, I majored in (ph) computer science a long time ago, but these days I've been really studying the social impacts of the systems, I can now sit there and code with any of my colleagues competently up to a certain point. I mean, let's be clear, it's got some limitations. But that is a huge change.

Chris Hayes: This alone, even if it were just this, and again, this gets back to this natural language question, right, because computer languages are built from the ground up from ones and zeros to machine language to languages on top of that and they sort of ascend from, you know, the most sort of difficult and indecipherable closest to what the machine is doing to the more abstract.

We've now layered a layer on top of whatever it is, Java, you know, Python, whatever, and it's just I need a program that will do this. It's just natural language programming. Even if that were the only use case, that strikes me as, like, insanely powerful, seismic and hugely maybe dislocating for the millions of people who have jobs in software engineering.

Kate Crawford: Again, one of the stories that's not getting enough attention is if we think about the last decade, we've had so many campaigns for, let's get girls to code. Let's, like, think about equity about, like, teaching people computer science because this is where you're going to have, you know, real capacity to be hired and to create new companies and, you know, create innovation.

But this as a step into an industry is being rapidly eroded, which is not to say that programmers are no longer useful, but it's that a lot of these basic programming skills have just been automated. That has already happened.

So, we have to ask a whole lot of new questions now around, particularly, the white-collar jobs that were seen as aspirational for so many people are the ones that are actually looking to be most vulnerable to automation right now.

I mean, it's interesting. I was giving a talk with a few ministers recently who were announcing a portfolio, saying, look, we are so excited around the creative industries because they're really resilient to automation. I mean, I hated to be a Debbie Downer, but I was, like, guys, I'm sorry. Like, they are first in line right now in terms of what we're looking at in terms of job transformation. Programmers, you know, and creatives are looking at seeing their skills replicated by systems that will cost next to nothing.

Chris Hayes: More of our conversation after this quick break.

(ADVERTISEMENT)

Chris Hayes: OK. So, let's talk about, say, first-year associates. Already, there's been lots of automation of, like, discovery, which used to be very, like, labor intensive and sort of brutal drudge work. But, you know, up the chain, writing briefs, writing legal memos.

Again, if I were a law firm partner now, would I get ChatGPT to do it right now as it currently is? No. Does it seem plausible in two or three years it's going to be good enough? Seems plausible.

Marketing copy, think of all the marketing copy that exists in the world. I mean, people produce that. They get paid to do it. I mean, marketing copy seems like a very obvious point to just automate.

Computer programmers, you know, all kinds of content. I mean, people that write manuals about stuff, particularly sophisticated technical manuals. I mean, it just seems like there's millions of common white-collar jobs or knowledge jobs or symbol manipulation jobs, there's a whole bunch of different jobs that just seem like they are really suddenly threatened with automation.

And I guess there's two ways to think about this. One is, OK, well, this has been happening since the dawn of modernity and, you know, people used to hand sew and then there were sewing machines and on and on and on. The other is, this represents something different and it's not just a continuation of that. And I guess how do you think about automation and the threat of automation in terms of dislocation and its social effects and economic effects?

Kate Crawford: So, we could turn to OpenAI's own prediction here, which is that they say 80 percent of jobs are going to be automated in some way by these systems. That is a staggering prediction.

Goldman Sachs just released a report this month saying 300 million jobs are looking at, you know, very serious forms of automation impacting what they do from day-to-day. So, I mean, it's staggering when you start to look at these numbers, right?

So, the thing that I think is interesting is to think about this historically, right? We could think about the Industrial Revolution. It takes a while to build factory machinery and train people on how things work. We could think about the transformations that happened in the sort of early days of the personal computer. Again, a slow and gradual rollout as people began to incorporate this technology.

The opposite is happening here. We've seen ChatGPT have the fastest uptake to a hundred million users of any technology that we've seen, faster than TikTok, faster than Instagram, faster than Facebook.

We're also looking at the ability of these systems to scale and be implemented very easily. Like it's not going to take a slow rollout where people have to build more printing presses or, you know, create more, you know, production lines.

This is a thing where you start plugging in GPT APIs into a whole lot of things that you do and that could happen in the next, honestly, 18 months. I think that's the kind of time frame that we're looking at.
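
To make the "plugging in GPT APIs" point concrete, here is a minimal sketch of how a product team might wire a hosted language model into an existing workflow, in this case an imagined support-ticket summarizer. It assumes the OpenAI Python client and an API key in the environment; the function, prompt and model name are placeholders, and any comparable LLM API would follow the same pattern.

```python
# A minimal sketch of what "plugging a GPT API into a product" can look like.
# It assumes the OpenAI Python client (v1+) and an OPENAI_API_KEY in the
# environment; the model name and prompts are placeholders, and exact client
# details vary by provider and version.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_ticket(ticket_text: str) -> str:
    """Ask a hosted language model to summarize a customer-support ticket."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any available chat model
        messages=[
            {"role": "system", "content": "Summarize support tickets in one sentence."},
            {"role": "user", "content": ticket_text},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(summarize_ticket("My invoice from March was charged twice, please refund one."))
```

Crawford's point is that this is roughly the scale of integration work required, which is why adoption can move so much faster than retooling factories or building printing presses.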

And certainly in my lifetime, I haven't seen anything happen that quickly. And I think about what that means in terms of the social changes that will come with this, in terms of the shifts in what people can expect to be their jobs and what they do, but also the literacy that is needed to go along with these systems: to know what are they good at, what are they actually really bad at, and what is the difference when a new model comes out.

I mean, we've seen now there are different types of models. GPT-4, much better than ChatGPT. We've seen Bard come out. We’ve seen, like, all of these new systems. You know, Facebook is now entering the fray.

And now, we have open-source models of LLMs, which means completely different tooling, completely different guardrails, in some cases, no guardrails, which means that all of that filtering that you see lots of people working hard on at OpenAI, Microsoft and Google, et cetera, a lot of that stuff can just go away if you train this differently.

So, you're going to have an ecology of really different systems and very opaque to most users as to which ones work better for what. And that opacity, that lack of literacy, that sort of speed all happening at the same time raises a lot of very real red flags for me.

Chris Hayes: Well, like what?

Kate Crawford: Well, I mean, we can go to the obvious ones. The obvious ones, and we've talked a bit about labor displacement. We could talk about misinformation, where here in the U.S. we're heading towards a very significant election next year. I am very worried about what this means for a misinformation ecosystem that is basically having, like, gas put on it and being lit in the next 12 months.

I think the ability for bad actors to do prompt injections, to start to exploit the security vulnerabilities, is very real here. I mean, the interesting thing is you can literally go to the companies who are building these systems, who publish their own risk scenarios, and these are pretty significant risk scenarios.

I mean, OpenAI published this extraordinary thing called the System Card, which actually articulates all of the things that they think could go wrong. And it's a long list, including some quite subtle things. Like you can bake in an ideology, a worldview, into a chat system and it's a very subtle thing, but it's extremely difficult to pull out.

And a lot of that gets back to how we train them. If you're a training system on a lot of Reddit data, you're going to have a kind of Reddity worldview, which is primarily populated by, you know, dudes in their 20s, you know, talking about stuff that they think is cool. That's going to be --

Chris Hayes: Right.

Kate Crawford: -- the worldview that you've baked into your bot. So, this is a much more subtle question than bias and discrimination, which many of us have researched for a long time. This is about deep political worldviews and ideologies that come along with the systems unquestioned.

Chris Hayes: There seems to me also just a prior question to even that, which is just accuracy. One thing that's interesting about these sorts of AI tools so far, and someone described this on the internet as, like, the way to think about their answer is: what would an answer to this sound like?

Kate Crawford: Right.

Chris Hayes: And when you think about it that way, it's like, right, it's delivering what an answer would sound like. It just might not be correct in the world. The accuracy question seems to me a really fundamental one and a little bit of a last-mile problem for the tech because, like, you can't be 95 percent right in a legal brief. And you can say, well, you can have a human layer, but a human layer over stuff that's confidently pronounced wrong is dangerous.

Kate Crawford: Right.

Chris Hayes: And so, I just wonder, do we think the accuracy question is a trivial or nontrivial tech question? Meaning, should that just go away as it gets better and the models get better in the training, or is there something deeper there, in the same way that, like, getting to an actual self-driving car has proven elusive? Is there an accuracy problem here that there's reason to believe will persist?

Kate Crawford: Well, it's interesting because I have a view on this but it's, first of all, important to see the scale of how this is playing out because this is a really good moment to see how the social adoption of these tools is going to have some very real speed bumps.

So, an example is a couple of weeks ago, a journalist reached out to me who was doing a profile on the podcaster Lex Fridman and as part --

Chris Hayes: Yup.

Kate Crawford: -- of her research, she typed into ChatGPT, who is Lex Fridman's biggest critic, and it confidently said, number one was Kate Crawford. And she said, oh, what articles has she written? And it confidently said, here are the articles that she's written with citations and actual links that you could click on, and she clicked on it.

She's like, oh, this is a real journal. I mean, this sublink isn't actually working but maybe the articles just moved on, so I'll reach out to Kate. It also summarized what the articles were saying. Kind of sounded plausible. They sound like, kind of, the things that I would probably say.

But I have to say to her, look, I'm really sorry, I have never said anything about Lex Fridman, haven’t been on his show, don't really have a view to add to your piece. Sorry, this is --

Chris Hayes: Wow.

Kate Crawford: -- completely wrong. Confidently wrong but completely wrong. And so, that moment, for her, it's interesting.

Now, we know, we've talked about why LLMs work that way. It's not technically surprising. It's doing, really, a sort of word prediction task here. It's doing pattern detection, right?

So, it's not a bad --

Chris Hayes: Yeah. There's a plausible world in which you have written these articles.

Kate Crawford: That’s it.

Chris Hayes: Right. Yeah.

Kate Crawford: Exactly, fairly plausible. And keep in mind that ChatGPT is an ungrounded model. So, we could look at Bing as a more grounded model. We can look at what's, you know, being built as a grounding in search results --

Chris Hayes: Sorry, what's that mean, ungrounded?

Kate Crawford: OK. So, difference is, like, ChatGPT is working off a corpus of training data that essentially stops in 2021 and it's not checking it against search results. So, it's not doing this --

Chris Hayes: Got you. Got you.

Kate Crawford: -- verification, right? So, some of the newer versions are much sort of more updated and are grounded in search data. So, that means you'll have much better accuracy and better performance.

But in this case, most people are using ChatGPT, right? And so, in her example, she was like, this sounds plausible, I need a comment from you for my piece. I'm like, OK, it's absolutely not true.

So, the issue is how are you supposed to know when something is giving you a grounded answer or an ungrounded answer.
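
For readers wondering what "grounding" looks like in practice, the following is a hypothetical sketch of the difference, assuming a retrieval step that the conversation only gestures at. The prompt templates, question and snippets are invented; the point is simply that a grounded system is told to answer from retrieved, current sources rather than from whatever it absorbed in training, which is why it is less prone to inventing citations.

```python
# A schematic sketch (invented for illustration) of the grounded vs. ungrounded
# distinction discussed above. An ungrounded model answers purely from what it
# absorbed in training; a grounded system first retrieves current documents
# (e.g., search results) and instructs the model to answer only from them.

def ungrounded_prompt(question: str) -> str:
    # The model must rely entirely on its frozen training data.
    return f"Answer the question: {question}"

def grounded_prompt(question: str, retrieved_snippets: list[str]) -> str:
    # Retrieved evidence is pasted into the prompt, and the model is told
    # to stick to it or admit when the sources do not contain an answer.
    sources = "\n".join(f"- {s}" for s in retrieved_snippets)
    return (
        "Using ONLY the sources below, answer the question. "
        "If the sources do not answer it, say you do not know.\n"
        f"Sources:\n{sources}\n\nQuestion: {question}"
    )

question = "Who is Lex Fridman's biggest critic?"
snippets = ["(imaginary search results would go here)"]
print(ungrounded_prompt(question))
print(grounded_prompt(question, snippets))
```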

Chris Hayes: Yes. This is a big question.

Kate Crawford: This is a big question.

Chris Hayes: And a huge problem.

Kate Crawford: It's a huge problem. And so, it's funny. Like at that time, I called this like rather than a hallucination, which is the common kind of tech word when these things make stuff up, I think we could call this a hallucitation, like, it's essentially making up citations, which it does extremely well in an ungrounded model.

So, a lot of people are going to be taken in by this, which contributes to the concerns around misinformation and this sort of ecology of just things that are wrong getting published and getting circulated and being understood as facts, which interestingly then go on to be scraped from the internet to train the next model.

So, we can start to --

Chris Hayes: Right.

Kate Crawford: -- think about what happens --

Chris Hayes: Right.

Kate Crawford: -- when more and more of the internet is actually generated material. But to answer your more important question, which is, is it going to go away, is this fixable? A lot of people are working on that problem right now.

The grounding definitely helped. So, that's why you'll see GPT-4 is much better at these sorts of things than ChatGPT. But is it ever going to be perfect? That is currently an unsolved technical problem. We don't yet know the answer to that.

Chris Hayes: You just said something that I think about a lot which is, OK, we got this thing called the internet that’s, you know, in a technical sense, it's probably 50 or 60 years old. But in the sort of a real, like, use way, you know, let's say early '90s, right, that’s how long I've been using the internet, but I'll just generationally say that.

So, you’ve got trillions of data points in that internet that's been created by people and people have created that information and put it there for a bunch of really interesting human motivations that have to do with, like, desire for recognition, desire for attention. I'm writing a book on that topic right now. So, it's something I've been thinking about. That's the data set you're training on.

Kate Crawford: Right.

Chris Hayes: Now, we're going to have AI creating more and more content. Like, it does seem you're having, you know, making a tape of a tape of a tape problem at a certain point because there is an uncanny valley quality to AI prose right now.

Again, it's a totally plausible facsimile. Sometimes, it really nails it. But generally, it has an uncanny valley feel to it. It's not wrong in a lot of cases. It's just totally 10 degrees off. The internet starts getting populated everywhere with AI content and as you said, that becomes the training set. What is that going to do?

Kate Crawford: I think about this as the inception problem, right, which is that you have one version of reality which is then sort of dropping to another version which is partly kind of based on just AI generations of images and text and video.

And then we'll have another version where we start to creep closer and closer to a place where most of what you see online is generated. You know, honestly, there are people, like the CEO of Runway (ph) --

Chris Hayes: It sucks. I don’t want that.

Kate Crawford: I know. It's like --

Chris Hayes: I don’t want that.

Kate Crawford: Well, it presents --

Chris Hayes: I don’t want that.

Kate Crawford: -- a lot of problems. It also presents a lot of problems for this, you know, tech sector that is generating its wealth through scraping, you know, the commons of, you know, the public internet, right?

So, how does that work if you are starting to get stuff that's really just got that uncanny valley vibe baked in? And what does that mean for images? What does that mean for the idea that this training data constitutes ground truths in any way?

I mean, I would say that there are core epistemological philosophical questions about using the internet as ground truth in the first place. But when it really --

Chris Hayes: Sure.

Kate Crawford: -- gets wild is when AI-generated content is so everywhere that you can't tell the difference in terms of what's what. Then you're going to start to see some serious issues around, can the performance keep increasing?

We might start to see a bit of a ceiling --

Chris Hayes: Right.

Kate Crawford: -- there, this may not be an infinite horizon that's opening up.

Chris Hayes: We'll be right back after we take this quick break.

(ADVERTISEMENT)

Chris Hayes: You know, it's interesting when I think about the sort of basic probabilistic, sort of brute force approach, the statistical turn as you put it from Robert Mercer's sort of description, one thing that I've noted is I have, of course, tried it on my own show, you know, fed it in and said, write an A block for me. It doesn't do a very good job, and I think the reason is there's just not enough data.

So, it doesn't have the benefit of millions of segments, right? There's just a few thousand, and it can't capture something with that little amount of data. Which is, you know, I would like to not see myself replaced as a host anytime soon. So, I guess this is some small satisfaction for me.

But I also do wonder, I don't know, I guess, the way that I try to think about it is like humans find a way, even with machines, doing a lot of stuff to be useful and needed. Like, if you go to a car factory in 2023, there's a lot of human stuff.

I mean, it's been much more automated over the years, but it's a really striking thing. If you ever walk in a car factory, it's people. Now, those people are aided by a lot of robots and there's a sort of like human-machine hybrid that's putting the car together, but you need humans to build the car.

And I wonder how much like the modern industrial car assembly line, which is obviously more automated than it once was, but is not totally automated, if that's a way to think about what our sort of hybrid future might be like.

Kate Crawford: I think that’s really interesting because we can take that example in two directions. One is to look at what happened with Ford, right, sort of the, you know, creation of sort of the first (ph) car factory.

Chris Hayes: Yeah.

Kate Crawford: Really changed the relationship to human labor. Human labor --

Chris Hayes: Yes.

Kate Crawford: -- became seen as a unit that had to be efficiently allocated throughout a set of machinic processes where ultimately keeping the machines running was the most important thing. And the human value of labor really shifted to something that was less about autonomy and human dignity, but you are working on a production line and you are doing this one little thing over and over again.

Now, what's interesting about this turn, and certainly with just the raw numbers of jobs that are going to be really exposed to this type of automation, is how it changes what you do. It doesn't mean that, you know, a robot is coming for your job, but maybe you are going to be doing more robotic work.

Maybe your job --

Chris Hayes: Right.

Kate Crawford: -- is going to turn you into more of a robot. I think this is the relationship to human labor, particularly in this moment where cognitive labor is the space of automation, not physical labor.

This is the big shift that we have to kind of get our heads around. What's really different to what we saw in the 20th century is that it's precisely these things that we saw as uniquely human that can now be replicated or at least automated, which might mean that rather than being an illustrator where you sit down and you draw a great picture of dragons for Dungeons & Dragons, you know, Volume 5, you're sitting there with Midjourney and you're writing in a bunch of prompts.

So, you become a prompt jockey and your engagement, that idea of, you know, your agency, your enjoyment, your creativity is really sitting in a different space. Now, we don’t have to valorize that. Like maybe there's some really great things that will come out of that. Totally open to that perspective.

Chris Hayes: Yeah.

Kate Crawford: I've worked with a lot of economists, including Daron Acemoglu at MIT. We recently wrote a paper looking at the last hundred years of automation and what has happened to human labor. Has it actually freed up more time for people to do more creative things or has it created more human drudgery?

And particularly, if we start to look at the digital era, you’ve seen kind of like a stagnation of productivity.

Chris Hayes: Yeah.

Kate Crawford: You've seen, like, actually less engagement. You know, fewer people being able to do the things that they're excited to do. We have to take that as a serious indicator here and think about: what are the ways in which humans can come out of this not finding themselves doing less, and less, and less?

Take, for example -- like, I don't know if you've been to airports recently, you know, but when you go into, like, a news agency in an airport, there used to be, like, a human there that you'd buy a newspaper, "The New Yorker," from, right?

Chris Hayes: Right.

Kate Crawford: And now, you have a screen and you're doing it --

Chris Hayes: Yup.

Kate Crawford: -- yourself. So, now you're --

Chris Hayes: Yeah.

Kate Crawford: -- effectively --

Chris Hayes: Yeah.

Kate Crawford: -- you know, you'll --

Chris Hayes: Yeah.

Kate Crawford: But the cost of that has been offloaded onto you, and you have a person standing there who is effectively a security guard just making sure you don't steal stuff. That is --

Chris Hayes: Slash, like, troubleshooters.

Kate Crawford: Right.

Chris Hayes: What you're describing is such a great example because it's like it's the worst of both worlds. You still need the labor of the person who's there because the machine doesn't quite work, but they're doing this, like, even worse task than maybe, I think, they would be as a cashier, which is, like, helping people when the machine doesn't work, and then people are already off on the wrong foot, like, mood-wise.

Anyway, I completely agree that that's the place to avoid. I mean, I guess, I don't know, you're, obviously, a very nuanced thinker about all this and so I don't want to be like, is this good or bad? But I do kind of want --

Kate Crawford: Don't make me do it (ph), Chris.

(LAUGHTER)

Chris Hayes: I do kind of want that answer because I will admit something: I feel freaked out by this in a way that I haven't felt freaked out about something in a very long time. And I can tell that's me aging into middle age, where when you're younger, new technologies seem exciting, and then you get older and you're, like, oh, man, you know, the world is going to hell in a handbasket and kids these days.

I don't know if it's seriously just my, like, generational experience and my age talking. But it gives me, like, a panic in my chest that I have never felt from a technology that I've interacted with. And I just wonder whether you feel that and what you think the modal outcome looks like here.

Kate Crawford: I have felt that, and I think about it almost as a type of vertigo. You know, when you're --

Chris Hayes: Yes.

Kate Crawford: -- standing on, like, one of those, like, whirligigs in, like, a fun park --

Chris Hayes: Yeah.

Kate Crawford: -- and suddenly, the speed gets really fast really quickly. And that vertiginous feeling, I think, really connects to the fact that we are seeing an exponential disruption rolled out so fast.

And honestly, in my field, I'm worried about people who are working on the technical side of AI and the socio-technical side experiencing massive burnout in the next 12 months because the amount --

Chris Hayes: Oh my God, yeah.

Kate Crawford: -- that we're all trying to stay ahead of and actually work productively within is almost inhuman. And at the same time, this is the thing that, you know, I think about a lot, we have less transparency and more power concentration in this industry than we've ever had before.

Chris Hayes: Yeah.

Kate Crawford: You were talking about a vanishingly small number of companies that can do this type of AI at scale. We're talking about really fewer than six companies in the world.

Chris Hayes: I mean, Sam Altman seems fine. I don't want Sam Altman determining the trajectory of all this. He's a tech guy. He's fine. I'm sure he's extremely smart. He seems extremely smart.

Yeah. We need to get some regulatory apparatus going here. I don't want to just smash the brakes button and, you know, there's got to be a smart way to do this. But, man, does it just feel like, I don't want our collective future just in the hands of this tiny little clique of people.

Kate Crawford: And I think you’ve really underscored the big question here, which is what kind of regulatory environment can --

Chris Hayes: Yeah.

Kate Crawford: -- we create quick enough that can actually --

Chris Hayes: Yeah.

Kate Crawford: -- engage with this, because if you look at what's happening right now in the U.S., we have one of the most permissive environments with the fewest regulatory guardrails that you can imagine. We don't even have omnibus federal privacy legislation. Like, that was a failure of the last 20 years, let alone where we are now, right?

Look at Europe. So, Europe is about to have the first major AI Act come into effect, and it was really drafted and prepared over the last three to four years, depending on how you count. They didn't even have generative AI at this scale in their sights when they were drafting this legislation.

Chris Hayes: Right.

Kate Crawford: So, we've already kind of gone through this massive shift where, you know, regulators and legislators are kind of racing to keep up. So, I think you're right. I think we do urgently need more regulation.

But at the same time, we need to think about this as an international question because if we end up really --

Chris Hayes: Yeah.

Kate Crawford: -- just having, you know, one country having strict regulations and another not, you're just going to see all of these companies start to do a type of triage around where they put things so they can do what they want to do. And that's really not going to address the type of concerns that I have around transparency and accountability here.

And ultimately, democracy. I mean, these are going to be so powerful that we have to ask about, you know, what is the power of this sort of small handful of tech companies. They're starting to look a lot like the power that we used to understand as, you know, vested in the nation state, except bigger because, of course, they're transnational entities.

So, we have to start to really wrestle with this idea of what does democratic participation look like when you have so much power centralized --

Chris Hayes: Yeah.

Kate Crawford: -- this way. I think we should be concerned. And that is the core issue. I'm much more worried about that than I am about a mythical super intelligence, you know, robot coming to sort of, you know, run the world. This is the thing that we should be worried about.

Chris Hayes: Kate Crawford has been thinking about, researching, writing about artificial intelligence for several decades. She studies the social and political implications of AI.

She's a Research Professor at USC Annenberg, Honorary Professor at University of Sydney, Senior Principal Researcher at Microsoft Research Lab in New York City. Also, the author of "Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence."

That hour flew by. I may have you back because there's a lot more to talk about. But that was (ph) --

Kate Crawford: We barely scratched the surface, right (ph).

Chris Hayes: I know, I know. Fantastic. Thank you so much.

Kate Crawford: Such a pleasure, Chris. See you again.

Chris Hayes: Once again, great thanks to Kate Crawford. I find her work on this really fascinating. She's been at it for a while. And now, so much of what she's been thinking and writing about is sort of here at our doorstep.

You can let us know what you think about AI and our AI future and present by tweeting us with the hashtag #WITHpod, email WITHpod@gmail.com. And, of course, be sure to follow us on TikTok by searching for WITHpod.

"Why Is This Happening?" is presented by MSNBC and NBC News, produced by Doni Holloway and Brendan O'Melia, engineered by Bob Mallory and featuring music by Eddie Cooper. You can see more of our work, including links to things we mentioned here, by going to nbcnews.com/whyisthishappening.